140 research outputs found

    Optical Detection Methods for Oil Deposited on the Seabed in High-Turbidity Waters

    Master's thesis, Tokyo University of Marine Science and Technology, Marine Environmental Conservation, AY 2016 (Heisei 28), No. 2559. Supervisor: Hisayuki Arakawa. Full text published: 2019-04-16.

    Deep Data Analysis on the Web

    Search engines are well known to people all over the world: people prefer keyword searches to typing exact URLs when opening websites or retrieving information. Collecting the finite sequences of keywords that represent important concepts within a set of authors is therefore important; in other words, we need knowledge mining. We use a simplicial concept method to speed up concept mining, an approach studied in a previous CS 298 project under Dr. Lin. This method is very fast: to mine the same concepts from a database with 1257 columns and 65k rows, FP-growth takes 876 seconds, while the simplicial complex method takes only 5 seconds. The collection of such concepts can be interpreted geometrically as a simplicial complex, which can be construed as the knowledge base of this set of documents. Furthermore, we use homology theory to analyze this knowledge base (deep data analysis). For example, in mining market basket data over the items {a, b, c, d}, we may find the frequent itemsets {abc, abd, acd, bcd} with homology group H2 = Z (the additive group of integers), which implies that very few customers buy all four items {abcd} together; we may then analyze possible causes.
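    The geometric picture in the example can be sketched in a few lines: take the maximal frequent itemsets {abc, abd, acd, bcd} as 2-simplices, close them under faces, and compute the Euler characteristic. The value χ = 2 matches a hollow 2-sphere (the boundary of the missing tetrahedron abcd), consistent with H2 = Z. This is an illustrative sketch of the interpretation, not the project's actual implementation:

    ```python
    from itertools import combinations

    # Maximal frequent itemsets, interpreted as 2-simplices (triangles).
    maximal = [frozenset("abc"), frozenset("abd"), frozenset("acd"), frozenset("bcd")]

    # Close under taking faces: every nonempty subset of a simplex is a simplex.
    complex_ = set()
    for s in maximal:
        for k in range(1, len(s) + 1):
            for face in combinations(sorted(s), k):
                complex_.add(frozenset(face))

    # Count simplices by dimension (dimension = size - 1).
    counts = {}
    for s in complex_:
        counts[len(s) - 1] = counts.get(len(s) - 1, 0) + 1

    # Euler characteristic: alternating sum V - E + F - ...
    chi = sum((-1) ** d * n for d, n in counts.items())
    print(counts)  # {0: 4, 1: 6, 2: 4}: 4 vertices, 6 edges, 4 triangles
    print(chi)     # 2, the Euler characteristic of a hollow sphere
    ```

    Because the solid tetrahedron abcd is absent, the complex encloses a void, which is exactly what H2 = Z detects.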

    Stability of the Minimum Energy Path

    The minimum energy path (MEP) is the most probable transition path connecting two equilibrium states of a potential energy landscape. It has been widely used to study transition mechanisms as well as transition rates in chemistry, physics, and materials science. In this paper, we derive a novel result establishing the stability of MEPs under perturbations of the energy landscape. The result also represents a crucial step towards studying the convergence of numerical discretisations of MEPs.
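    The numerical discretisations mentioned above can be illustrated with a toy string-method calculation: a chain of images between two minima is relaxed downhill and periodically redistributed by arc length. The double-well potential, image count, and step size below are illustrative choices, not from the paper, whose analysis is about stability of the exact MEP:

    ```python
    import math

    # Toy double-well potential with minima at (-1, 0) and (1, 0).
    # Its exact minimum energy path is the straight segment y = 0.
    def grad(x, y):
        """Gradient of V(x, y) = (x^2 - 1)^2 + y^2 / 2."""
        return 4.0 * x * (x * x - 1.0), y

    n = 21
    # Start from a deliberately bent path between the two minima.
    path = [(-1.0 + 2.0 * i / (n - 1), 0.5 * math.sin(math.pi * i / (n - 1)))
            for i in range(n)]

    step = 0.01
    for _ in range(2000):
        # 1) Relax interior images downhill; the endpoints stay at the minima.
        for i in range(1, n - 1):
            gx, gy = grad(*path[i])
            path[i] = (path[i][0] - step * gx, path[i][1] - step * gy)

        # 2) Reparametrize: redistribute images uniformly by arc length,
        #    which stops the string from sliding into the two minima.
        lengths = [0.0]
        for i in range(1, n):
            lengths.append(lengths[-1] + math.hypot(path[i][0] - path[i - 1][0],
                                                    path[i][1] - path[i - 1][1]))
        total = lengths[-1]
        new_path, j = [path[0]], 0
        for i in range(1, n - 1):
            target = total * i / (n - 1)
            while lengths[j + 1] < target:
                j += 1
            t = (target - lengths[j]) / (lengths[j + 1] - lengths[j])
            new_path.append((path[j][0] + t * (path[j + 1][0] - path[j][0]),
                             path[j][1] + t * (path[j + 1][1] - path[j][1])))
        new_path.append(path[-1])
        path = new_path

    # The relaxed string collapses onto y = 0, the true MEP.
    print(max(abs(y) for _, y in path))
    ```

    A stability result of the kind the paper derives underwrites such schemes: if a small perturbation of the landscape could move the MEP arbitrarily far, discrete approximations like this one could not be expected to converge.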

    RepBNN: towards a precise Binary Neural Network with Enhanced Feature Map via Repeating

    A binary neural network (BNN) is an extreme quantization of a convolutional neural network (CNN), with all features and weights mapped to just 1 bit. Although BNNs save substantial memory and computation, making CNNs applicable on edge or mobile devices, they suffer a drop in network performance due to the reduced representation capability after binarization. In this paper, we propose RepConv, a new replaceable and easy-to-use convolution module that enhances feature maps by replicating the input or output β times along the channel dimension, at no extra cost in the number of parameters or convolutional computation. We also define a set of RepTran rules for applying RepConv throughout BNN modules such as binary convolution, fully connected layers, and batch normalization. Experiments demonstrate that after the RepTran transformation, a set of highly cited BNNs achieve universally better performance than their original versions. For example, the Top-1 accuracy of Rep-ReCU-ResNet-20, i.e., a RepBconv-enhanced ReCU-ResNet-20, reaches 88.97% on CIFAR-10, which is 1.47% higher than that of the original network. And Rep-AdamBNN-ReActNet-A achieves 71.342% Top-1 accuracy on ImageNet, a new state-of-the-art result for BNNs. Code and models are available at: https://github.com/imfinethanks/Rep_AdamBNN. Comment: this paper has absolutely nothing to do with RepVGG; "Rep" means repeating.
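    The core replication idea can be sketched without any deep-learning framework: repeating a feature map β times along the channel dimension widens the tensor the next layer sees while introducing no new values to learn. The function name and the list-based tensor layout below are illustrative, not taken from the released code:

    ```python
    def repeat_channels(feature_map, beta):
        """Replicate the channel axis beta times: (C, H, W) -> (beta*C, H, W).

        The enlarged map reuses the same values, so replication itself adds
        no learnable parameters; only the channel count seen downstream grows.
        """
        return [ch for _ in range(beta) for ch in feature_map]

    x = [[[1, 2], [3, 4]],   # channel 0
         [[5, 6], [7, 8]]]   # channel 1
    y = repeat_channels(x, beta=3)
    print(len(y))  # 6 channels: the two input channels repeated 3 times
    ```

    In a real BNN the repeated map would feed a binary convolution, where the richer channel axis is what restores some of the representation capability lost to 1-bit quantization.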

    Reliability evaluation of reservoir bank slopes with weak interlayers considering spatial variability

    Reservoir bank slopes with weak interlayers are common in the Three Gorges Reservoir area. Their stability is affected by multiple coupled factors (e.g., reservoir water fluctuations, rainfall, and earthquakes in the reservoir area). Meanwhile, the variability of the mechanical parameters of reservoir banks makes it more difficult to determine the dynamic stability of bank slopes under complex mechanical environments. In this paper, the multiple disaster-causing factors and the spatial variability of the landslide were comprehensively considered to study the long-term evolution of bank slopes with weak interlayers. Specifically, the limit equilibrium method combined with random fields was used to calculate the reliability. Furthermore, the long-term effects of dry-wet cycles on reservoir bank landslides and the sensitivity of the statistical parameters of the random field were discussed. The results show that earthquake action had the most significant impact on the failure probability of the landslide. The failure probability was more significantly affected by the vertical fluctuation range of the parameters and by the coefficient of variation of the internal friction angle. The increase in failure probability under dry-wet cycles was mainly caused by the degradation of the weak interlayer's parameters. The proposed reliability evaluation method can be applied to predict the long-term stability of reservoir bank slopes.
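    The role of the friction angle's variability in the failure probability can be illustrated with a minimal Monte Carlo sketch: sample the interlayer friction angle from an assumed distribution, evaluate a classical infinite-slope factor of safety, and count failures. All numbers, the normal distribution, and the simplified slope model below are illustrative assumptions, not the paper's data; the paper itself couples the limit equilibrium method with spatially correlated random fields rather than a single random variable:

    ```python
    import math
    import random

    random.seed(0)

    def factor_of_safety(c, phi_deg, weight, alpha_deg):
        """Infinite-slope factor of safety for a frictional-cohesive interlayer:
        FS = (c + W*cos(a)*tan(phi)) / (W*sin(a)), a classical simplification."""
        a = math.radians(alpha_deg)
        resisting = c + weight * math.cos(a) * math.tan(math.radians(phi_deg))
        driving = weight * math.sin(a)
        return resisting / driving

    # Illustrative units: cohesion 15, slab weight 100, slope angle 35 degrees;
    # friction angle ~ N(mean 30 deg, sd 3 deg), i.e. coefficient of variation 0.10.
    trials = 100_000
    failures = 0
    for _ in range(trials):
        phi = random.gauss(30.0, 3.0)
        if factor_of_safety(c=15.0, phi_deg=phi, weight=100.0, alpha_deg=35.0) < 1.0:
            failures += 1

    pf = failures / trials
    print(f"estimated failure probability: {pf:.3f}")
    ```

    Increasing the coefficient of variation of the friction angle widens the sampled distribution and raises the estimated failure probability, which is the sensitivity the abstract reports.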

    SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions

    With the rapid development of Large Language Models (LLMs), increasing attention has been paid to their safety concerns. Consequently, evaluating the safety of LLMs has become an essential task for facilitating their broad application. Nevertheless, the absence of comprehensive safety evaluation benchmarks poses a significant impediment to effectively assessing and enhancing the safety of LLMs. In this work, we present SafetyBench, a comprehensive benchmark for evaluating the safety of LLMs, which comprises 11,435 diverse multiple-choice questions spanning 7 distinct categories of safety concerns. Notably, SafetyBench incorporates both Chinese and English data, facilitating evaluation in both languages. Our extensive tests over 25 popular Chinese and English LLMs in both zero-shot and few-shot settings reveal a substantial performance advantage for GPT-4 over its counterparts, and there is still significant room for improving the safety of current LLMs. We believe SafetyBench will enable fast and comprehensive evaluation of LLMs' safety, and foster the development of safer LLMs. Data and evaluation guidelines are available at https://github.com/thu-coai/SafetyBench. Submission entrance and leaderboard are available at https://llmbench.ai/safety. Comment: 15 pages.
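    Scoring such a multiple-choice benchmark reduces to comparing a model's chosen option letter against the gold label, broken down per safety category. A minimal scorer sketch follows; the category names and record layout are illustrative, not SafetyBench's actual schema:

    ```python
    from collections import defaultdict

    # Illustrative records: (category, gold answer, model answer).
    results = [
        ("offensiveness", "A", "A"),
        ("offensiveness", "B", "C"),
        ("privacy", "D", "D"),
        ("privacy", "A", "A"),
        ("ethics", "C", "B"),
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for category, gold, pred in results:
        total[category] += 1
        correct[category] += (pred == gold)

    # Per-category accuracy, the usual way such benchmarks are reported.
    for category in sorted(total):
        print(f"{category}: {correct[category] / total[category]:.2f}")

    # Overall accuracy across all categories.
    overall = sum(correct.values()) / sum(total.values())
    print(f"overall: {overall:.2f}")  # 3 of 5 correct -> 0.60
    ```

    Per-category breakdowns matter because a model can look safe on average while failing badly in one category, which aggregate accuracy would hide.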

    xQSM: Quantitative Susceptibility Mapping with Octave Convolutional and Noise Regularized Neural Networks

    Quantitative susceptibility mapping (QSM) is a valuable magnetic resonance imaging (MRI) contrast mechanism that has demonstrated broad clinical applications. However, image reconstruction for QSM is challenging due to its ill-posed dipole inversion process. In this study, a new deep learning method for QSM reconstruction, named xQSM, was designed by introducing modified state-of-the-art octave convolutional layers into a U-net backbone. The xQSM method was compared with recently proposed U-net-based and conventional regularization-based methods, using peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and region-of-interest measurements. Results from a numerical phantom, a simulated human brain, four in vivo healthy human subjects, a multiple sclerosis patient, a glioblastoma patient, and a healthy mouse brain showed that xQSM consistently produced fewer artifacts than the conventional methods and enhanced susceptibility contrast, particularly in the iron-rich deep grey matter, compared with the original U-net. The xQSM method also substantially shortened the reconstruction time from minutes, for conventional iterative methods, to only a few seconds. Comment: 37 pages, 10 figures, 3 tables.
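    Of the quantitative metrics listed, PSNR is the simplest to state: it compares the mean squared reconstruction error against the image's peak value on a logarithmic scale. A pure-Python sketch with synthetic values (not QSM data):

    ```python
    import math

    def psnr(reference, reconstruction, peak):
        """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
        n = len(reference)
        mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / n
        return 10 * math.log10(peak ** 2 / mse)

    ref = [0.0, 0.5, 1.0, 0.5]   # reference image, flattened
    rec = [0.1, 0.5, 0.9, 0.5]   # reconstruction with small errors
    print(f"{psnr(ref, rec, peak=1.0):.1f} dB")  # prints 23.0 dB
    ```

    Higher PSNR means lower pixel-wise error; it is typically paired with SSIM, as here, because PSNR alone is insensitive to structural artifacts.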